Numerical modeling of fluid flow in porous media and in driven colloidal suspensions
This article summarizes some of our main efforts performed on the computing facilities provided by the high-performance computing centers in Stuttgart and Karlsruhe. First, large-scale lattice Boltzmann simulations are used for a resolution-dependent analysis of the geometrical and transport properties of a porous sandstone model. The second part of this report focuses on Brownian dynamics simulations of optical tweezer experiments in which a large colloidal particle is dragged through a polymer solution and a colloidal crystal. The aim of these simulations is to improve our understanding of structuring effects, jamming behavior and defect formation in such colloidal systems.
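The Brownian dynamics picture of dragging a particle with an optical tweezer can be sketched as an overdamped Langevin update with a harmonic trap moving at constant speed. The sketch below is a minimal illustration in reduced units; the trap stiffness, drag coefficient, and drag protocol are assumptions for illustration, not the parameters used in the report.

```python
import numpy as np

def brownian_tweezer_steps(n_steps, dt, k_trap, gamma, D, v_drag, rng):
    """Overdamped Brownian dynamics of one particle in a moving optical trap:
        x <- x + (F / gamma) dt + sqrt(2 D dt) * xi,
        F = -k_trap * (x - x_trap(t)),  x_trap(t) = v_drag * t.
    All parameters are illustrative (reduced units)."""
    x = 0.0
    traj = np.empty(n_steps)
    for i in range(n_steps):
        x_trap = v_drag * i * dt          # trap dragged at constant speed
        force = -k_trap * (x - x_trap)    # harmonic restoring force
        x += (force / gamma) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        traj[i] = x
    return traj

# With D = 0 the particle settles into a steady lag behind the trap,
# x_trap - x -> gamma * v_drag / k_trap (here 1.0 * 1.0 / 5.0 = 0.2).
rng = np.random.default_rng(0)
traj = brownian_tweezer_steps(20000, 1e-3, k_trap=5.0, gamma=1.0,
                              D=0.0, v_drag=1.0, rng=rng)
```

In a real simulation of the experiments described above, the deterministic force would also include interactions with the surrounding polymer solution or colloidal crystal; the trap term alone reproduces only the steady-lag behavior.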
Sparse robot swarms: Moving swarms to real-world applications
Robot swarms are groups of robots that each act autonomously based on only local perception and coordination with neighbouring robots. While current swarm implementations can be large in size (e.g. 1000 robots), they are typically constrained to working in highly controlled indoor environments. Moreover, a common property of swarms is the underlying assumption that the robots act in close proximity to each other (e.g. 10 body lengths apart), and typically employ uninterrupted, situated, close-range communication for coordination. Many real-world applications, including environmental monitoring and precision agriculture, however, require scalable groups of robots to act jointly over large distances (e.g. 1000 body lengths), rendering the use of dense swarms impractical. Using a dense swarm for such applications would be invasive to the environment and unrealistic in terms of mission deployment, maintenance and post-mission recovery. To address this problem, we propose the sparse swarm concept, and illustrate its use in the context of four application scenarios. For one scenario, which requires a group of rovers to traverse, and monitor, a forest environment, we identify the challenges involved at all levels in developing a sparse swarm—from the hardware platform to communication-constrained coordination algorithms—and discuss potential solutions. We outline open questions of theoretical and practical nature, which we hope will bring the concept of sparse swarms to fruition.
Scale-up of precipitation processes
This thesis concerns the scale-up of precipitation processes aimed at predicting
product particle characteristics. Although precipitation is widely used in the chemical
and pharmaceutical industry, successful scale-up is difficult due to the absence of a
validated methodology. It is found that none of the conventional scale-up criteria
reported in the literature (equal power input per unit mass, equal tip speed, equal stirring
rate) is capable of predicting the experimentally observed effects of the mixing
conditions on kinetic rates and particle characteristics. As a result of high gradients in
the supersaturation field during precipitation, particularly in the feed zone, high local
gradients in the nucleation rate are to be expected.
In this thesis, a compartmental mixing model (Segregated Feed Model, SFM)
linked to the population balance is proposed for scaling up both continuous and
semibatch precipitation processes, and is validated with experiments on different scales.
Experiments were carried out using two chemical systems (calcium oxalate
CaC₂O₄ and calcium carbonate CaCO₃), varying the residence time/feed time, feed
concentration, feed point position, impeller type, feed tube diameter and stirring rate in
geometrically similar reactors ranging from 0.3 to 30 L.
A new procedure is introduced in order to solve the inverse problem for
determination of the kinetic parameters for nucleation, growth, disruption and
agglomeration from the particle size distributions obtained in the continuous laboratory-scale
experiments. This method, where the kinetic rates were extracted separately and
sequentially from the particle size distribution, was found to be a reliable alternative to
the conventional simultaneous estimation of all kinetic parameters from the distribution.
Using the kinetic parameters extracted from the laboratory-scale experiments,
the population balance is solved within the Segregated Feed Model. The local mixing
parameters also required for solving the SFM are obtained from a sliding mesh
Computational Fluid Dynamics (CFD) model. These are used to specify the different
micromixing and mesomixing conditions in the feed and bulk zones of the reactor.
The model accurately predicts the mean size, coefficient of variation and
nucleation rate on different scales for different process and mixing conditions in both
continuous and semibatch mode of operation. Furthermore, the model confirms the
observed greater effect of mixing on product particle characteristics in semibatch than in
continuous operation. This is thought to be due to direct mixing of the feed solution in
semibatch operation with the other component already present in the reactor.
The methodology proposed here for the scale-up of precipitation processes is
very versatile and computationally efficient. It combines the advantages of both a CFD
and a population balance approach without having to solve the equations together,
which is currently still impracticable due to the excessive computational demand and
simulation time required.
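The population balance coupled to nucleation and growth kinetics can be sketched with the standard method of moments. The sketch below assumes nucleation at zero size and size-independent growth with constant rates B and G; these are placeholder values for illustration, not the kinetic parameters estimated in the thesis, where the rates depend on the local supersaturation supplied by the SFM.

```python
import numpy as np

def moments_nucleation_growth(B, G, t_end, dt, n_moments=4):
    """Integrate the moment form of the population balance:
        dmu_0/dt = B                  (nucleation at zero size)
        dmu_j/dt = j * G * mu_{j-1}   (size-independent growth), j >= 1.
    B (#/m^3/s) and G (m/s) are held constant here for illustration."""
    n_steps = int(round(t_end / dt))
    mu = np.zeros(n_moments)
    for _ in range(n_steps):
        dmu = np.empty_like(mu)
        dmu[0] = B
        for j in range(1, n_moments):
            dmu[j] = j * G * mu[j - 1]
        mu = mu + dt * dmu  # forward Euler; adequate for this linear system
    return mu

mu = moments_nucleation_growth(B=1e6, G=1e-8, t_end=100.0, dt=0.01)
number_mean_size = mu[1] / mu[0]  # first moment over zeroth: G * t / 2 here
```

For constant rates the analytical solution is mu_0 = B t and mu_1 = B G t^2 / 2, so the number-mean size grows as G t / 2; checking the integrator against this closed form is a convenient sanity test before introducing supersaturation-dependent kinetics.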
Multichannel Anomaly of the Resonance Pole Parameters Resolved
Inspired by anomalies which the standard scattering matrix pole-extraction
procedures have produced in a mathematically well defined coupled-channel
model, we have developed a new method based solely on the assumption of
partial-wave analyticity. The new method is simple and applicable not only to
theoretical predictions but to the empirical partial-wave data as well. Since
the standard pole-extraction procedures turn out to be the lowest-order term of
the proposed method, the anomalies are understood and resolved.
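One way to picture the relation between the two approaches is through the Laurent expansion of a partial-wave amplitude about a resonance pole; this is standard complex analysis, not a formula taken from the paper:

```latex
% Laurent expansion of a partial-wave amplitude T(W) about a pole at W_p:
T(W) = \frac{a_{-1}}{W - W_p} + a_0 + a_1\,(W - W_p) + \dots
```

Keeping only the leading pole term corresponds to a Breit-Wigner-type extraction; a method whose lowest order reproduces the standard procedures, as described above, would retain the higher-order terms of such an expansion.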
The detailed mechanism of eta production in pp scattering up to Tlab = 4.5 GeV
Contrary to very early beliefs, the experimental cross section data for eta
production in proton-proton scattering are well described if only pi and eta
meson exchange diagrams are used to calculate the Born term. The inclusion
of initial and final state interactions is done in the factorization
approximation by using the inverse square of the Jost function. The two body
Jost functions are obtained from the S matrices in the low energy effective
range approximation. The danger of double counting in the p-eta final state
interaction is discussed. It is shown that higher partial waves in
meson-nucleon amplitudes do not contribute significantly below an excess energy
of Q = 100 MeV. Known difficulties of reducing the multi-resonance model to a
single-resonance one are illustrated.
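The factorization approximation described above can be summarized schematically. The relations below are the standard textbook forms of the Jost-function representation of the S matrix and the effective-range expansion, with a and r the scattering length and effective range of the relevant two-body subsystem; the paper's specific parametrization may differ in detail:

```latex
% Born term dressed by inverse squared Jost functions (factorization approx.):
|M|^2 \;\simeq\; \frac{|M_{\mathrm{Born}}|^2}
                      {|J_{pp}(q)|^2\,|J_{p\eta}(q')|^2}
% two-body S matrix and the effective-range expansion used to construct J:
\qquad
S(q) = \frac{J(-q)}{J(q)} = e^{2i\delta(q)},
\qquad
q\cot\delta(q) = -\frac{1}{a} + \frac{r}{2}\,q^2
```

The enhancement factors 1/|J|^2 thus encode the low-energy initial- and final-state interactions through only a and r of each two-body pair.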
On quaternary complex Hadamard matrices of small orders
One of the main goals of design theory is to classify, characterize and count
various combinatorial objects with some prescribed properties. In most cases,
however, one quickly encounters a combinatorial explosion and even if the
complete enumeration of the objects is possible, there is no apparent way to
study them in detail, store them efficiently, or generate a particular one
rapidly. In this paper we propose a novel method to deal with these
difficulties, and illustrate it by presenting the classification of quaternary
complex Hadamard matrices up to order 8. The obtained matrices are members of
only a handful of parametric families, and each inequivalent matrix, up to
transposition, can be identified through its fingerprint.
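A quaternary complex Hadamard matrix of order n has all entries among the fourth roots of unity {1, i, -1, -i} and satisfies H H† = n I. The order-4 Fourier matrix below is a standard example used only to illustrate the defining property; the paper's order-8 classification data are not reproduced here.

```python
import numpy as np

def is_quaternary_hadamard(H):
    """Check the two defining properties: every entry is a 4th root of
    unity, and the rows are pairwise orthogonal (H H^dagger = n I)."""
    n = H.shape[0]
    fourth_roots = np.array([1, 1j, -1, -1j])
    entries_ok = all(np.isclose(fourth_roots, x).any() for x in H.ravel())
    orthogonal = np.allclose(H @ H.conj().T, n * np.eye(n))
    return entries_ok and orthogonal

# Order-4 Fourier matrix, entries i^(j*k): a quaternary complex Hadamard matrix
F4 = np.array([[1j ** (j * k) for k in range(4)] for j in range(4)])
```

Equivalence classes are usually taken up to permutations and multiplication of rows and columns by fourth roots of unity, so a check like this is only the first step before reducing a matrix to a canonical representative.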
Optimal signal states for quantum detectors
Quantum detectors provide information about quantum systems by establishing
correlations between certain properties of those systems and a set of
macroscopically distinct states of the corresponding measurement devices. A
natural question of fundamental significance is how much information a quantum
detector can extract from the quantum system it is applied to. In the present
paper we address this question within a precise framework: given a quantum
detector implementing a specific generalized quantum measurement, what is the
optimal performance achievable with it for a concrete information readout task,
and what is the optimal way to encode information in the quantum system in
order to achieve this performance? We consider some of the most common
information transmission tasks - the Bayes cost problem (of which minimal error
discrimination is a special case), unambiguous message discrimination, and the
maximal mutual information. We provide general solutions to the Bayesian and
unambiguous discrimination problems. We also show that the maximal mutual
information has an interpretation of a capacity of the measurement, and derive
various properties that it satisfies, including its relation to the accessible
information of an ensemble of states, and its form in the case of a
group-covariant measurement. We illustrate our results with the example of a
noisy two-level symmetric informationally complete measurement, for whose
capacity we give analytical proofs of optimality. The framework presented here
provides a natural way to characterize generalized quantum measurements in
terms of their information readout capabilities.
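In the simplest Bayesian setting with uniform priors and one signal state per outcome, the optimal encoding for a fixed POVM {M_i} has a closed form: encode message i into the top eigenvector of M_i, giving average success probability (1/n) * sum_i lambda_max(M_i), since each term tr(M_i rho_i) is maximized independently. The sketch below uses a noisy two-outcome qubit measurement as an illustrative assumption; it is not the symmetric informationally complete example analyzed in the paper.

```python
import numpy as np

def optimal_uniform_bayes(povm):
    """Maximum average success probability for a fixed POVM, uniform priors,
    one signal state per outcome: encode outcome i into the eigenvector of
    M_i with the largest eigenvalue, so p = (1/n) * sum_i lambda_max(M_i)."""
    n = len(povm)
    # eigvalsh returns eigenvalues in ascending order; [-1] is the largest
    return sum(np.linalg.eigvalsh(M)[-1] for M in povm) / n

# Noisy projective qubit measurement: M_i = eta * |i><i| + (1 - eta) * I / 2
eta = 0.8
I = np.eye(2)
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
povm = [eta * P0 + (1 - eta) * I / 2, eta * P1 + (1 - eta) * I / 2]
p = optimal_uniform_bayes(povm)  # lambda_max = 0.9 for each element, so p = 0.9
```

With unequal priors the same argument applies to the operators p_i M_i, and allowing entangled inputs or multiple uses of the detector leads to the capacity quantities discussed in the abstract.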
Quantitative analysis of numerical estimates for the permeability of porous media from lattice-Boltzmann simulations
During the last decade, lattice-Boltzmann (LB) simulations have been improved
to become an efficient tool for determining the permeability of porous media
samples. However, well-known improvements of the original algorithm are often
not implemented. These include, for example, multirelaxation-time schemes or
improved boundary conditions, as well as different possibilities to impose a
pressure gradient. This paper shows that the calculated permeabilities can
differ significantly unless a carefully selected setup is used. We present a
detailed discussion of possible simulation setups and
quantitative studies of the influence of simulation parameters. We illustrate
our results by applying the algorithm to a Fontainebleau sandstone and by
comparing our benchmark studies to other numerical permeability measurements in
the literature.
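Whatever the simulation setup, the permeability itself is extracted from the converged flow field via Darcy's law for creeping single-phase flow. A minimal sketch follows; the numerical values are illustrative assumptions, not measurements of the Fontainebleau sample.

```python
def darcy_permeability(mean_velocity, dyn_viscosity, pressure_drop, length):
    """Darcy's law for single-phase creeping flow through a sample of the
    given length: <u> = (k / mu) * (dP / L), so k = mu * <u> * L / dP.
    SI units: m/s, Pa*s, Pa, m -> permeability k in m^2."""
    return dyn_viscosity * mean_velocity * length / pressure_drop

# Illustrative numbers (water-like viscosity, 1 mm sample):
k = darcy_permeability(mean_velocity=1e-6, dyn_viscosity=1e-3,
                       pressure_drop=1e3, length=1e-3)
# k = 1e-3 * 1e-6 * 1e-3 / 1e3 = 1e-15 m^2 (on the order of a millidarcy)
```

In a lattice Boltzmann run the mean velocity and pressure drop enter in lattice units, so the sensitivity to boundary conditions and forcing discussed above propagates directly into k through this relation.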
Label-free electrochemical monitoring of DNA ligase activity
This study presents a simple, label-free electrochemical technique for monitoring DNA ligase activity. DNA ligases are enzymes that catalyze the joining of breaks in the DNA backbone and are of significant scientific interest due to their essential role in DNA metabolism and their importance to a range of molecular biology methodologies. The electrochemical behavior of DNA at mercury and some amalgam electrodes is strongly influenced by its backbone structure, allowing perfect discrimination between DNA molecules containing or lacking free ends. This variation in electrochemical behavior has previously been utilized for sensitive detection of DNA damage involving sugar-phosphate backbone breakage. Here we show that the same principle can be utilized to monitor the reverse process, i.e., the repair of strand breaks by the action of DNA ligases. We demonstrate applications of the electrochemical technique for distinguishing between ligatable and unligatable breaks in plasmid DNA using T4 DNA ligase, as well as for studies of the DNA backbone-joining activity of recombinant fragments of E. coli DNA ligase.